Pre-training and multi-task training for federated keyword extraction

Authors

Abstract

In keyword extraction, the generalization ability of supervised models is relatively weak. The key to improving a model's robustness and accuracy is to collect more data for training. However, text is private information, which makes it harder to collect. To address this problem, we apply federated learning so that user data can improve performance while remaining on the local device. We also integrate unlabeled utterances into semantic parsing and propose a pre-training and multi-task based federated model: the local models learn from local information in an unsupervised setting, while the core model is trained in a supervised setting.
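As a rough illustration of the setup the abstract describes, the Python sketch below runs a FedAvg-style loop in which each client first pre-trains an encoder on its private unlabeled data (the unsupervised task) and then fine-tunes a supervised keyword scorer (the supervised task), with the server only averaging weights so raw text never leaves the device. Everything here is an assumption for illustration: the tied-weight autoencoder, the logistic head, and all names, shapes, and hyper-parameters are placeholders, not the authors' actual model.

```python
# Minimal FedAvg-style sketch of local unsupervised pre-training plus a
# supervised task head. All objectives and shapes are illustrative
# assumptions, not the architecture proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
DIM, HID = 16, 8  # toy feature / hidden sizes (illustrative only)

def local_pretrain(enc, unlabeled, lr=1e-2, epochs=2):
    """Unsupervised local step: tied-weight linear autoencoder
    reconstruction, standing in for the paper's pre-training task."""
    enc = enc.copy()
    for _ in range(epochs):
        for x in unlabeled:
            z = enc @ x                # encode
            err = enc.T @ z - x        # reconstruction error
            enc -= lr * (np.outer(z, err) + np.outer(enc @ err, x))
    return enc

def local_finetune(enc, head, labeled, lr=1e-2, epochs=2):
    """Supervised local step: logistic keyword/non-keyword scorer,
    standing in for the keyword-extraction task head."""
    head = head.copy()
    for _ in range(epochs):
        for x, y in labeled:
            h = enc @ x
            p = 1.0 / (1.0 + np.exp(-head @ h))
            head -= lr * (p - y) * h
    return head

# Synthetic private data held by three clients; it never leaves the "device".
clients = []
for _ in range(3):
    unlabeled = [rng.normal(size=DIM) for _ in range(20)]
    labeled = [(rng.normal(size=DIM), float(rng.integers(0, 2)))
               for _ in range(20)]
    clients.append((unlabeled, labeled))

enc_global = rng.normal(scale=0.1, size=(HID, DIM))
head_global = np.zeros(HID)

for rnd in range(5):                                      # federated rounds
    enc_updates, head_updates = [], []
    for unlabeled, labeled in clients:
        enc = local_pretrain(enc_global, unlabeled)       # unsupervised task
        head = local_finetune(enc, head_global, labeled)  # supervised task
        enc_updates.append(enc)
        head_updates.append(head)
    enc_global = np.mean(enc_updates, axis=0)             # FedAvg aggregation
    head_global = np.mean(head_updates, axis=0)

print("trained encoder shape:", enc_global.shape)
```

Only the averaged weights cross the network in this sketch; swapping the toy objectives for a real pre-training loss and keyword tagger would not change the shape of the loop.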

Similar articles

Defensive Collaborative Multi-task Training

Deep neural networks (DNNs) have shown impressive performance on hard perceptual problems. However, researchers have found that DNN-based systems are vulnerable to adversarial examples, which contain specially crafted, human-imperceptible perturbations. Such perturbations cause DNN-based systems to misclassify the adversarial examples, with potentially disastrous consequences where safety or security i...

Federated Multi-Task Learning

Federated learning poses new statistical and systems challenges in training machine-learning models over distributed networks of devices. In this work, we show that multi-task learning is naturally suited to handle the statistical challenges of this setting, and propose a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues. Our method and theor...

Keyword and metadata extraction from pre-prints

In this paper we study how to provide metadata for a pre-print archive. Metadata includes, but is not limited to, title, authors, citations, and keywords, and is used to both present data to the user in a meaningful way, and to index and cross-reference the pre-prints. We are particularly interested in studying different methods to obtain metadata for a pre-print. We have developed a system tha...

Multi-view and multi-task training of RST discourse parsers

We experiment with different ways of training LSTM networks to predict RST discourse trees. The main challenge for RST discourse parsing is the limited amounts of training data. We combat this by regularizing our models using task supervision from related tasks as well as alternative views on discourse structures. We show that a simple LSTM sequential discourse parser takes advantage of this mu...

Strength for Task Training

Journal

Journal title: Journal of physics

Year: 2021

ISSN: 0022-3700, 1747-3721, 0368-3508, 1747-3713

DOI: https://doi.org/10.1088/1742-6596/1978/1/012055